Building deep learning models with Keras

python
datacamp
machine learning
deep learning
keras
Author

kakamana

Published

April 5, 2023

Building deep learning models with Keras

Throughout this chapter, you will build deep learning models for both regression and classification using the Keras library. In this chapter, you will learn about the Specify-Compile-Fit workflow, which can be used to make predictions, and by the end of the chapter, you will have all the tools necessary to construct deep neural networks.

This Building deep learning models with Keras post is part of [Datacamp course: Introduction to Deep Learning in Python]. Deep learning is the technique behind the most exciting capabilities in a wide range of fields, such as robotics, natural language processing, image recognition, and artificial intelligence (including AlphaGo). As part of this course, you will gain hands-on, practical experience using deep learning with Keras 2.0, the latest version of a cutting-edge Python library for deep learning.

This is my learning experience of data science through DataCamp. These repository contributions are part of my learning journey through my graduate program, Master of Applied Data Science (MADS), at the University of Michigan, DeepLearning.AI, Coursera & DataCamp. You can find my similar articles & more stories on my Medium & LinkedIn profiles. I am also available on Kaggle & GitHub (blogs & repos). Thank you for your motivation, support & valuable feedback.

These include projects, coursework & notebooks I created throughout my data science journey. They are kept for reproducibility & future reference only. All source code, slides, or screenshots are the intellectual property of their respective content authors. If you find this content beneficial, kindly consider a learning subscription from DeepLearning.AI, Coursera, or DataCamp.

Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

plt.rcParams['figure.figsize'] = (8, 8)

Creating a Keras model

  • Model building steps (a minimal end-to-end sketch follows this list)
    • Specify Architecture
    • Compile
    • Fit
    • Predict
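
Before working with real data, here is a minimal end-to-end sketch of these four steps on made-up data; the toy two-feature input and the layer sizes are illustrative assumptions, not values from the course.

Code
import numpy as np
import tensorflow as tf

# Made-up data: 100 samples, 2 features, target is their sum
X = np.random.rand(100, 2)
y = X.sum(axis=1)

# 1. Specify architecture
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(10, activation='relu', input_shape=(2,)))
model.add(tf.keras.layers.Dense(1))

# 2. Compile
model.compile(optimizer='adam', loss='mean_squared_error')

# 3. Fit
model.fit(X, y, epochs=5)

# 4. Predict
print(model.predict(X[:5]))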

Understanding your data

It’s good to understand your data by performing some exploratory analysis.

Code
df = pd.read_csv('dataset/hourly_wages.csv')
df.head()
wage_per_hour union education_yrs experience_yrs age female marr south manufacturing construction
0 5.10 0 8 21 35 1 1 0 1 0
1 4.95 0 9 42 57 1 1 0 1 0
2 6.67 0 12 1 19 0 0 0 1 0
3 4.00 0 12 4 22 0 0 0 0 0
4 7.50 0 12 17 35 0 1 0 0 0
Code
df.describe()
wage_per_hour union education_yrs experience_yrs age female marr south manufacturing construction
count 534.000000 534.000000 534.000000 534.000000 534.000000 534.000000 534.000000 534.000000 534.000000 534.000000
mean 9.024064 0.179775 13.018727 17.822097 36.833333 0.458801 0.655431 0.292135 0.185393 0.044944
std 5.139097 0.384360 2.615373 12.379710 11.726573 0.498767 0.475673 0.455170 0.388981 0.207375
min 1.000000 0.000000 2.000000 0.000000 18.000000 0.000000 0.000000 0.000000 0.000000 0.000000
25% 5.250000 0.000000 12.000000 8.000000 28.000000 0.000000 0.000000 0.000000 0.000000 0.000000
50% 7.780000 0.000000 12.000000 15.000000 35.000000 0.000000 1.000000 0.000000 0.000000 0.000000
75% 11.250000 0.000000 15.000000 26.000000 44.000000 1.000000 1.000000 1.000000 0.000000 0.000000
max 44.500000 1.000000 18.000000 55.000000 64.000000 1.000000 1.000000 1.000000 1.000000 1.000000

Specifying a model

Now you’ll get to work with your first Keras model, and you will immediately be able to run more complex neural network models on larger datasets than in the first two chapters.

To begin, you will take the skeleton of a neural network and add a hidden layer and an output layer. During fitting, Keras will then perform the optimization so that your model keeps improving.

As a starting point, you will predict workers’ wages based on factors such as their industry, education, and level of experience. The dataset is stored in a pandas DataFrame called df. For convenience, everything in df except the target has been converted to a NumPy array called predictors, and the target, wage_per_hour, is available as a NumPy array called target.

Code
import tensorflow as tf

predictors = df.iloc[:, 1:].to_numpy()
target = df.iloc[:, 0].to_numpy()
Code
n_cols = predictors.shape[1]

# Set up the model: model
model = tf.keras.Sequential()

# Add the first layer
model.add(tf.keras.layers.Dense(50, activation='relu', input_shape=(n_cols, )))

# Add the second layer
model.add(tf.keras.layers.Dense(32, activation='relu'))

# Add the output layer
model.add(tf.keras.layers.Dense(1))
Metal device set to: Apple M2 Pro
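
Nothing has been trained yet; you have only specified the architecture. As a quick optional check (not part of the original exercise), model.summary() prints each layer’s output shape and parameter count:

Code
model.summary()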

Compiling and fitting a model

  • Why you need to compile your model
    • Specify the optimizer
      • Many options and mathematically complex
      • “Adam” is usually a good choice
    • Loss function
      • “mean_squared_error”
  • Fitting a model
    • Applying backpropagation and gradient descent with your data to update the weights
    • Scaling data before fitting can ease optimization (see the sketch below)
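
As a minimal sketch of that last point, here is one way to standardize each column of predictors to mean 0 and standard deviation 1 before fitting (assuming no column is constant):

Code
# Standardize each column: subtract the column mean, divide by the column std
means = predictors.mean(axis=0)
stds = predictors.std(axis=0)
predictors_scaled = (predictors - means) / stds

# predictors_scaled can then be passed to model.fit() in place of predictors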

Compiling the model

You’re now going to compile the model you specified earlier. To compile the model, you need to specify the optimizer and loss function to use. You can read more about the ‘adam’ optimizer as well as other Keras optimizers here, and if you are really curious, you can read the original paper that introduced the Adam optimizer.

Code
model.compile(optimizer='adam', loss='mean_squared_error')

# Verify that model contains information from compiling
print("Loss function: " + model.loss)
Loss function: mean_squared_error

Fitting the model

You have reached the most exciting part of the process. The model will now be fitted. Recall that the data to be used as predictive features is loaded into a NumPy array called predictors, while the data to be predicted is stored in a NumPy array called target.

Code
model.fit(predictors, target, epochs=10);
Epoch 1/10
17/17 [==============================] - 2s 14ms/step - loss: 94.7410
Epoch 2/10
17/17 [==============================] - 0s 4ms/step - loss: 30.0728
Epoch 3/10
17/17 [==============================] - 0s 4ms/step - loss: 26.5535
Epoch 4/10
17/17 [==============================] - 0s 4ms/step - loss: 23.3664
Epoch 5/10
17/17 [==============================] - 0s 4ms/step - loss: 22.3079
Epoch 6/10
17/17 [==============================] - 0s 4ms/step - loss: 21.9069
Epoch 7/10
17/17 [==============================] - 0s 4ms/step - loss: 21.7943
Epoch 8/10
17/17 [==============================] - 0s 4ms/step - loss: 21.6478
Epoch 9/10
17/17 [==============================] - 0s 4ms/step - loss: 21.4921
Epoch 10/10
17/17 [==============================] - 0s 4ms/step - loss: 21.5042
2023-04-06 00:03:18.217275: W tensorflow/tsl/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz

Classification models

  • Classification
    • categorical_crossentropy loss function
    • Similar to log loss: Lower is better
    • Add metrics=[‘accuracy’] to compile step for easy-to-understand diagnostics
    • Output layer has a separate node for each possible outcome, and uses softmax activation (a worked example follows this list)
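
To make the connection to log loss concrete, here is a small hand computation of categorical crossentropy on made-up softmax outputs (the numbers are illustrative, not from the course):

Code
import numpy as np

# One-hot true labels for 3 samples and 2 classes
y_true = np.array([[1, 0], [0, 1], [1, 0]])

# Softmax outputs: each row sums to 1
y_pred = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])

# Categorical crossentropy: mean of -log(probability assigned to the true class)
loss = -np.mean(np.sum(y_true * np.log(y_pred), axis=1))
print(loss)  # about 0.28; lower is better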

Understanding your classification data

Now you will start modeling with a new dataset for a classification problem. This data includes information about passengers on the Titanic. You will use predictors such as age, fare and where each passenger embarked from to predict who will survive. This data is from a tutorial on data science competitions. Look here for descriptions of the features.

Code
df = pd.read_csv('dataset/titanic_all_numeric.csv')
df.head()
survived pclass age sibsp parch fare male age_was_missing embarked_from_cherbourg embarked_from_queenstown embarked_from_southampton
0 0 3 22.0 1 0 7.2500 1 False 0 0 1
1 1 1 38.0 1 0 71.2833 0 False 1 0 0
2 1 3 26.0 0 0 7.9250 0 False 0 0 1
3 1 1 35.0 1 0 53.1000 0 False 0 0 1
4 0 3 35.0 0 0 8.0500 1 False 0 0 1
Code
df.describe()
survived pclass age sibsp parch fare male embarked_from_cherbourg embarked_from_queenstown embarked_from_southampton
count 891.000000 891.000000 891.000000 891.000000 891.000000 891.000000 891.000000 891.000000 891.000000 891.000000
mean 0.383838 2.308642 29.699118 0.523008 0.381594 32.204208 0.647587 0.188552 0.086420 0.722783
std 0.486592 0.836071 13.002015 1.102743 0.806057 49.693429 0.477990 0.391372 0.281141 0.447876
min 0.000000 1.000000 0.420000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
25% 0.000000 2.000000 22.000000 0.000000 0.000000 7.910400 0.000000 0.000000 0.000000 0.000000
50% 0.000000 3.000000 29.699118 0.000000 0.000000 14.454200 1.000000 0.000000 0.000000 1.000000
75% 1.000000 3.000000 35.000000 1.000000 0.000000 31.000000 1.000000 0.000000 0.000000 1.000000
max 1.000000 3.000000 80.000000 8.000000 6.000000 512.329200 1.000000 1.000000 1.000000 1.000000

Last steps in classification models

You’ll now create a classification model using the titanic dataset, which has been pre-loaded into a DataFrame called df. You’ll take information about the passengers and predict which ones survived.

The predictive variables are stored in a NumPy array predictors. The target to predict is in df.survived, though you’ll have to manipulate it for Keras. The number of predictive features is stored in n_cols.

Here, you’ll use the ‘sgd’ optimizer, which stands for Stochastic Gradient Descent.

Code
predictors = df.iloc[:, 1:].astype(np.float32).to_numpy()
target = df.survived.astype(np.float32).to_numpy()
n_cols = predictors.shape[1]
Code
from tensorflow.keras.utils import to_categorical

# Convert the target to categorical: target
target = to_categorical(target)

# Set up the model
model = tf.keras.Sequential()

# Add the first layer
model.add(tf.keras.layers.Dense(32, activation='relu', input_shape=(n_cols, )))

# Add the second layer
model.add(tf.keras.layers.Dense(2, activation='softmax'))

# Compile the model
model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])

# Fit the model
model.fit(predictors, target, epochs=10);
Epoch 1/10
28/28 [==============================] - 1s 12ms/step - loss: 2.9438 - accuracy: 0.5859
Epoch 2/10
28/28 [==============================] - 0s 6ms/step - loss: 0.8443 - accuracy: 0.6588
Epoch 3/10
28/28 [==============================] - 0s 6ms/step - loss: 0.7318 - accuracy: 0.6790
Epoch 4/10
28/28 [==============================] - 0s 6ms/step - loss: 0.6796 - accuracy: 0.6936
Epoch 5/10
28/28 [==============================] - 0s 7ms/step - loss: 0.6132 - accuracy: 0.6992
Epoch 6/10
28/28 [==============================] - 0s 6ms/step - loss: 0.6118 - accuracy: 0.6936
Epoch 7/10
28/28 [==============================] - 0s 6ms/step - loss: 0.6035 - accuracy: 0.6992
Epoch 8/10
28/28 [==============================] - 0s 7ms/step - loss: 0.5826 - accuracy: 0.7015
Epoch 9/10
28/28 [==============================] - 0s 6ms/step - loss: 0.5817 - accuracy: 0.6970
Epoch 10/10
28/28 [==============================] - 0s 6ms/step - loss: 0.5917 - accuracy: 0.6891

Using models

  • Using models (a save/load sketch follows this list)
    • Save
    • Load
    • Make predictions
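
Making predictions is demonstrated below. Saving and loading are not shown in this post, but a minimal sketch using the standard Keras calls would look like this (the file name model.h5 is an arbitrary choice):

Code
# Save the trained model to disk (HDF5 format)
model.save('model.h5')

# Load it back later and use it exactly like the original model
loaded_model = tf.keras.models.load_model('model.h5')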

Making predictions

The trained network from your previous coding exercise is now stored as model. New data on which to make predictions is stored in a NumPy array called pred_data. Use model to make predictions on your new data.

In this exercise, your predictions will be probabilities, which is the most common way for data scientists to communicate their predictions to colleagues.

Code
pred_data = pd.read_csv('dataset/titanic_pred.csv').astype(np.float32).to_numpy()
Code
predictions = model.predict(pred_data)

# Calculate predicted probability of survival: predicted_prob_true
predicted_prob_true = predictions[:, 1]

# Print predicted_prob_true
print(predicted_prob_true)
3/3 [==============================] - 0s 5ms/step
[0.28089795 0.5369367  0.8194578  0.43969733 0.2429138  0.22338603
 0.09212727 0.4757859  0.23406765 0.7769634  0.26234588 0.43050936
 0.22952308 0.40452078 0.23025243 0.14099503 0.42768195 0.5521629
 0.13858092 0.36927137 0.8736501  0.2646314  0.09594445 0.37949288
 0.38556737 0.25240952 0.727409   0.56477743 0.2628111  0.9068939
 0.61755836 0.45026004 0.2687303  0.2879128  0.34966743 0.801498
 0.3239651  0.24471253 0.7468566  0.60672486 0.3203911  0.41660646
 0.7004056  0.21700425 0.3683664  0.15297973 0.43684775 0.23067285
 0.57805747 0.83401185 0.34824163 0.04445559 0.5003279  0.6631353
 0.5117625  0.40352866 0.94096553 0.37212968 0.3529382  0.2687303
 0.23548837 0.37054414 0.50150293 0.47871944 0.3895826  0.31755432
 0.53184533 0.74690145 0.2692734  0.4110315  0.26245508 0.7669255
 0.17477356 0.13974458 0.63087267 0.54621756 0.358591   0.33060008
 0.2421917  0.87394214 0.6014025  0.21087462 0.4814627  0.29580712
 0.25925612 0.25033543 0.35517913 0.6868682  0.43503323 0.6165654
 0.23913547]
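
If you need hard class labels rather than probabilities, a common follow-up step (an addition to the exercise, not part of it) is to take the most probable class for each row:

Code
# Convert the two-column probability output into 0/1 class labels
predicted_classes = np.argmax(predictions, axis=1)
print(predicted_classes[:10])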